12 research outputs found

    Large-Scale Optical Neural Networks based on Photoelectric Multiplication

    Full text link
    Recent success in deep neural networks has generated strong interest in hardware accelerators to improve speed and energy consumption. This paper presents a new type of photonic accelerator based on coherent detection that is scalable to large ($N \gtrsim 10^6$) networks and can be operated at high (GHz) speeds and very low (sub-aJ) energies per multiply-and-accumulate (MAC), using the massive spatial multiplexing enabled by standard free-space optical components. In contrast to previous approaches, both weights and inputs are optically encoded so that the network can be reprogrammed and trained on the fly. Simulations of the network using models for digit- and image-classification reveal a "standard quantum limit" for optical neural networks, set by photodetector shot noise. This bound, which can be as low as 50 zJ/MAC, suggests performance below the thermodynamic (Landauer) limit for digital irreversible computation is theoretically possible in this device. The proposed accelerator can implement both fully-connected and convolutional networks. We also present a scheme for back-propagation and training that can be performed in the same hardware. This architecture will enable a new class of ultra-low-energy processors for deep learning. Comment: Text: 10 pages, 5 figures, 1 table. Supplementary: 8 pages, 5 figures, 2 tables.
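    The shot-noise "standard quantum limit" invites a quick numerical check. Below is a minimal Python sketch (not the paper's code) of a dot product computed with a fixed photon budget per MAC: each detected count is Poisson-sampled, so the error per MAC falls off roughly as the inverse square root of the photon budget. The photon budget `n_mac`, the signed differential encoding, and the uniform weight/input distributions are all illustrative assumptions.

```python
import numpy as np

# Illustrative model only: a dot product y = w . x computed "optically",
# where each of the N accumulations detects on average n_mac photons.
# Poisson statistics on the detected counts stand in for shot noise.

rng = np.random.default_rng(0)

def noisy_mac(w, x, n_mac):
    """Dot product with shot noise at a budget of n_mac photons per MAC."""
    ideal = w * x                          # per-element products, in [-1, 1]
    scale = n_mac                          # photons encoding one unit signal
    # Encode signed values as a difference of two Poisson-detected rates,
    # mimicking balanced detection of positive/negative components.
    pos = rng.poisson(np.clip(ideal, 0, None) * scale)
    neg = rng.poisson(np.clip(-ideal, 0, None) * scale)
    return (pos - neg).sum() / scale

N = 1000
w = rng.uniform(-1, 1, N)
x = rng.uniform(-1, 1, N)

for n_mac in [0.1, 1, 10, 100]:
    trials = [noisy_mac(w, x, n_mac) for _ in range(200)]
    err = np.std(trials) / np.sqrt(N)      # per-MAC error, ~ 1/sqrt(n_mac)
    print(f"n_mac={n_mac:6.1f}  rms error per MAC ≈ {err:.4f}")
```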

    Single chip photonic deep neural network with accelerated training

    Full text link
    As deep neural networks (DNNs) revolutionize machine learning, energy consumption and throughput are emerging as fundamental limitations of CMOS electronics. This has motivated a search for new hardware architectures optimized for artificial intelligence, such as electronic systolic arrays, memristor crossbar arrays, and optical accelerators. Optical systems can perform linear matrix operations at exceptionally high rate and efficiency, motivating recent demonstrations of low-latency linear algebra and optical energy consumption below a photon per multiply-accumulate operation. However, demonstrating systems that co-integrate both linear and nonlinear processing units in a single chip remains a central challenge. Here we introduce such a system in a scalable photonic integrated circuit (PIC), enabled by several key advances: (i) high-bandwidth and low-power programmable nonlinear optical function units (NOFUs); (ii) coherent matrix multiplication units (CMXUs); and (iii) in situ training with optical acceleration. We experimentally demonstrate this fully-integrated coherent optical neural network (FICONN) architecture for a 3-layer DNN comprising 12 NOFUs and three CMXUs operating in the telecom C-band. Using in situ training on a vowel classification task, the FICONN achieves 92.7% accuracy on a test set, which is identical to the accuracy obtained on a digital computer with the same number of weights. This work lends experimental evidence to theoretical proposals for in situ training, unlocking orders of magnitude improvements in the throughput of training data. Moreover, the FICONN opens the path to inference at nanosecond latency and femtojoule per operation energy efficiency. Comment: 21 pages, 10 figures. Comments welcome.
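    The abstract does not spell out the in-situ training protocol. One common hardware-friendly approach, sketched below under that assumption, is zeroth-order (simultaneous-perturbation) gradient estimation: the chip is treated as a black-box loss function evaluated in hardware, so no digital model of the photonics is needed. Everything here, including the stand-in `chip_forward` simulator and the tanh stand-in for a NOFU, is hypothetical illustration, not the FICONN protocol.

```python
import numpy as np

# Hypothetical sketch of in-situ training by zeroth-order gradient estimation.
# The chip is a black box loss(theta) evaluated in hardware; `chip_forward`
# below is a toy stand-in simulator, not a model of the actual PIC.

rng = np.random.default_rng(1)

def chip_forward(theta, x):
    """Stand-in for running the photonic circuit on input x."""
    W = theta.reshape(4, 4)
    return np.tanh(W @ x)                  # tanh stands in for the NOFU

def loss(theta, x, target):
    return np.sum((chip_forward(theta, x) - target) ** 2)

theta = rng.normal(0, 0.5, 16)             # programmable on-chip settings
x = rng.normal(0, 1, 4)
target = np.array([1.0, 0.0, 0.0, 0.0])

lr, eps = 0.05, 1e-3
for step in range(200):
    # Simultaneous-perturbation estimate: two hardware evaluations per step.
    delta = rng.choice([-1.0, 1.0], size=theta.shape)
    g = (loss(theta + eps * delta, x, target)
         - loss(theta - eps * delta, x, target)) / (2 * eps) * delta
    theta -= lr * g
print("final loss:", loss(theta, x, target))
```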

    Deep Learning with Coherent VCSEL Neural Networks

    Full text link
    Deep neural networks (DNNs) are reshaping the field of information processing. With their exponential growth challenging existing electronic hardware, optical neural networks (ONNs) are emerging to process DNN tasks in the optical domain with high clock rates, parallelism and low-loss data transmission. However, to explore the potential of ONNs, it is necessary to investigate the full-system performance incorporating the major DNN elements, including matrix algebra and nonlinear activation. Existing challenges to ONNs are high energy consumption due to low electro-optic (EO) conversion efficiency, low compute density due to large device footprint and channel crosstalk, and long latency due to the lack of inline nonlinearity. Here we experimentally demonstrate an ONN system that simultaneously overcomes all these challenges. We exploit neuron encoding with volume-manufactured micron-scale vertical-cavity surface-emitting laser (VCSEL) transmitter arrays that exhibit high EO conversion (<5 attojoule/symbol with $V_\pi$ = 4 mV), high operation bandwidth (up to 25 GS/s), and compact footprint (<0.01 mm$^2$ per device). Photoelectric multiplication allows low-energy matrix operations at the shot-noise quantum limit. Homodyne detection-based nonlinearity enables nonlinear activation with instantaneous response. The full-system energy efficiency and compute density reach 7 femtojoules per operation (fJ/OP) and 25 TeraOP/(mm$^2\cdot$s), both representing a >100-fold improvement over state-of-the-art digital computers, with several more orders of magnitude of headroom for future improvement. Beyond neural network inference, its feature of rapid weight updating is crucial for training deep learning models. Our technique opens an avenue to large-scale optoelectronic processors to accelerate machine learning tasks from data centers to decentralized edge devices. Comment: 10 pages, 5 figures.
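    Photoelectric multiplication via balanced homodyne detection can be checked in a few lines: interfere two fields on a lossless 50/50 splitter and subtract the two detected powers, and the difference is proportional to the product of the field amplitudes. The sketch below (an assumed formulation with real amplitudes and unit detector responsivity, not the authors' code) verifies the identity numerically.

```python
import numpy as np

# Balanced homodyne "multiplier": interfere field a (input) with field b
# (weight) on a 50/50 beamsplitter and subtract the two detected powers.
# The difference signal is 2*a*b for real amplitudes -- an analog
# multiplication performed by the photodetectors themselves.

rng = np.random.default_rng(2)

def homodyne_product(a, b):
    out1 = (a + b) / np.sqrt(2)            # beamsplitter output ports
    out2 = (a - b) / np.sqrt(2)
    return np.abs(out1) ** 2 - np.abs(out2) ** 2   # balanced detection

a = rng.normal(size=5)                      # real amplitudes encode inputs
b = rng.normal(size=5)                      # real amplitudes encode weights
print(homodyne_product(a, b))               # elementwise 2*a*b
print(2 * a * b)                            # matches the optical result
```

    Summing the difference photocurrents, in time or across a detector, turns these elementwise products into a MAC; shot noise on the two detectors is what sets the quantum limit discussed above.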

    Attojoule scale computation of large optical neural networks

    No full text
    This electronic version was submitted by the student author. The certified thesis is available in the Institute Archives and Special Collections. Thesis: M. Eng., Massachusetts Institute of Technology, Department of Electrical Engineering and Computer Science, 2019. Cataloged from the student-submitted PDF version of the thesis. Includes bibliographical references (pages 61-70). The ultra-high bandwidth and low energy cost of modern photonics offer many opportunities for improving both speed and energy efficiency in classical information processing. Recently a new architecture has been proposed which allows for substantial energy reductions in matrix-matrix products by utilizing balanced homodyne detection for computation and optical fan-out for data delivery. In this thesis I work towards the analysis and implementation of both analog and digital optical neural networks. For analog optical neural networks I discuss both the physical implementation of this system and an analysis of the limits imposed on it by shot noise, crosstalk, and electro-optic/opto-electronic information conversion. From these results, it is found that femtojoule-scale computation per multiply-and-accumulate operation is achievable in the near term, with further energy gains foreseeable with emerging technology. This thesis also presents a system-scale throughput and energy analysis of digital optical neural networks, which can enable very high data rates (>10 GHz) with CMOS-compatible voltages at a weight-transmitter power dissipation comparable to a modern CPU. By Alexander Sludds. M.Eng., Massachusetts Institute of Technology, Department of Electrical Engineering and Computer Science.
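    The fan-out argument behind the femtojoule-scale claim can be made concrete with a toy energy budget: a transmitted weight or input value is paid for once but shared by the N receivers it fans out to, so its cost is amortized as 1/N, while receiver conversion costs are paid per MAC. The per-component energies in this Python estimate are illustrative assumptions, not the thesis's measured values.

```python
# Toy energy-per-MAC budget for an optically fanned-out matrix engine.
# All component energies below are illustrative assumptions.

E_TX = 1e-12      # J per transmitted value (laser + modulator), assumed
E_RX = 1e-15      # J per receiver conversion event, assumed
E_ADC = 5e-15     # J per analog-to-digital conversion, assumed

def energy_per_mac(N):
    """Each transmitted value is shared by N receivers via optical fan-out
    (cost amortized 1/N); each output integrates N MACs before one ADC."""
    return E_TX / N + E_RX + E_ADC / N

for N in [10, 100, 1_000, 10_000, 100_000]:
    print(f"N={N:>7}: {energy_per_mac(N) / 1e-15:8.2f} fJ/MAC")
```

    With these assumed numbers the budget approaches the fixed receiver cost of about 1 fJ/MAC as N grows, which is the sense in which scale itself buys energy efficiency.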

    Large-Scale Optical Neural Networks Based on Photoelectric Multiplication

    No full text
    © 2019 authors. Published by the American Physical Society under the terms of the Creative Commons Attribution 4.0 International license (https://creativecommons.org/licenses/by/4.0/). Further distribution of this work must maintain attribution to the author(s) and the published article's title, journal citation, and DOI. Recent success in deep neural networks has generated strong interest in hardware accelerators to improve speed and energy consumption. This paper presents a new type of photonic accelerator based on coherent detection that is scalable to large (N ≳ 10^6) networks and can be operated at high (gigahertz) speeds and very low (subattojoule) energies per multiply and accumulate (MAC), using the massive spatial multiplexing enabled by standard free-space optical components. In contrast to previous approaches, both weights and inputs are optically encoded so that the network can be reprogrammed and trained on the fly. Simulations of the network using models for digit and image classification reveal a "standard quantum limit" for optical neural networks, set by photodetector shot noise. This bound, which can be as low as 50 zJ/MAC, suggests that performance below the thermodynamic (Landauer) limit for digital irreversible computation is theoretically possible in this device. The proposed accelerator can implement both fully connected and convolutional networks. We also present a scheme for backpropagation and training that can be performed in the same hardware. This architecture will enable a new class of ultralow-energy processors for deep learning.

    Freely scalable and reconfigurable optical hardware for deep learning

    No full text
    As deep neural network (DNN) models grow ever-larger, they can achieve higher accuracy and solve more complex problems. This trend has been enabled by an increase in available compute power; however, efforts to continue to scale electronic processors are impeded by the costs of communication, thermal management, power delivery and clocking. To improve scalability, we propose a digital optical neural network (DONN) with intralayer optical interconnects and reconfigurable input values. The path-length-independence of optical energy consumption enables information locality between a transmitter and a large number of arbitrarily arranged receivers, which allows greater flexibility in architecture design to circumvent scaling limitations. In a proof-of-concept experiment, we demonstrate optical multicast in the classification of 500 MNIST images with a 3-layer, fully-connected network. We also analyze the energy consumption of the DONN and find that digital optical data transfer is beneficial over electronics when the spacing of computational units is on the order of >10 μm.
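    The >10 μm crossover can be sanity-checked with a back-of-envelope comparison: electrical interconnect energy grows linearly with wire length (charging wire capacitance at the logic swing), while an optical link's receive cost is length-independent. The constants in this Python sketch are typical order-of-magnitude values assumed for illustration, not the paper's figures.

```python
# Back-of-envelope: electrical vs optical data-transfer energy per bit.
# Constants are typical order-of-magnitude values, assumed for illustration.

C_WIRE = 0.2e-15   # F/um, on-chip wire capacitance per unit length (assumed)
V_DD = 1.0         # V, logic swing (assumed)
E_OPT = 2e-15      # J/bit, length-independent optical receive cost (assumed)

def electrical_energy(length_um):
    return C_WIRE * length_um * V_DD ** 2   # E = C*L*V^2 per bit, worst case

crossover = E_OPT / (C_WIRE * V_DD ** 2)
print(f"optical wins beyond ~{crossover:.0f} um")   # ~10 um at these values
```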

    A scalable optical neural network architecture using coherent detection

    No full text
    © COPYRIGHT SPIE. Downloading of the abstract is permitted for personal use only. Storing, processing, and learning from data is a central task in both industrial practice and modern science. Recent advances in modern statistical learning, particularly Deep Neural Networks (DNNs), have given record-breaking performance on tasks in game playing,1,2 natural language processing,3 computer vision,4 computational biology,5,6 and many others. The rapid growth of the field has been driven by an increase in the amount of public datasets,7 improvements to algorithms,8 and a substantial growth in computing power.9 In order to perform well on these tasks networks have had to grow in size, learning more complicated statistical features. The training and deployment of these large neural networks has spurred the creation of many neural network accelerators to aid in the computation of these networks.10-12 Existing general-purpose computing devices such as CPUs and GPUs are limited both by thermal dissipation per unit area and by yield associated with large chips.13,14 The design of Application Specific Integrated Circuits (ASICs) has aided in decreasing the energy consumption per workload substantially by limiting the supported operations on chip. An example of this is the first-generation tensor processing unit (TPU),15 which is able to perform the inference of large convolutional neural networks in a datacenter in <10 ms with an idle power of 28 W and a workload power of 40 W. It may seem counterintuitive, then, that the limiting factor for the implementation of DNNs is not computation, but rather the energy and bandwidth associated with reading and writing data from memory as well as the energy cost of moving data inside of the ASIC.15,16 Several emerging technologies, such as in-memory computing17 and memristive crossbar arrays,18 promise increased performance, but these emerging architectures suffer from calibration issues and limited accuracy.19 Photonics as a field has had tremendous success in improving the energy efficiency of data interconnects.20 This has motivated the creation of optical neural networks (ONNs) based on 3D-printed diffractive elements,21 spiking neural networks utilizing ring resonators,22 reservoir computing,23 and nanophotonic circuits.24 However, these architectures have several issues. 3D-printed diffractive networks and schemes requiring spatial light modulators are non-programmable, meaning that they are unable to perform the task of training. Nanophotonic circuits allow for an O(N^2) array of interferometers to be programmed, providing passive matrix-vector multiplication. However, the large (1 mm^2) size of on-chip electro-optic interferometers means that scaling to an array of 100x100 would require 10,000 mm^2 of silicon, demonstrating the limitations of scaling this architecture. To date no architecture has demonstrated high-speed (GHz) computation with more than N ≥ 10,000 neurons. Here we present an architecture that is scalable to N ≥ 10^6 neurons. The key mechanism of this architecture is balanced homodyne detection. By scaling the architecture to such a large size we show that we can drive down the energy cost per operation associated with the optical component of this architecture, reaching a bound set by shot noise on the receiving photodetectors, which leads to classification error. We call this bound a standard quantum limit (SQL), which reaches 100 zJ/MAC on problems such as MNIST.
    We also analyze the energy consumption using existing technologies and show that sub-fJ/MAC energy consumption should be possible. This paper is organized as follows: in Section 1 we discuss the function of this architecture as a matrix-matrix processor. In Section 2 we analyze the energy consumption of the architecture. In Section 3 we discuss methods for training and for extending the accelerator to a broader scope of problems, namely convolutional neural networks (CNNs).
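    For scale, the quoted 100 zJ/MAC corresponds to less than one photon per MAC at telecom wavelengths. A worked check, taking λ = 1550 nm as an assumed operating wavelength:

```latex
% Energy of one telecom photon, and the photon budget implied by the SQL.
% The 1550 nm wavelength is an assumption for illustration.
\[
  E_{\mathrm{photon}} = \frac{hc}{\lambda}
  = \frac{(6.626\times 10^{-34}\,\mathrm{J\,s})(3\times 10^{8}\,\mathrm{m/s})}{1550\times 10^{-9}\,\mathrm{m}}
  \approx 1.28\times 10^{-19}\,\mathrm{J} \approx 128\,\mathrm{zJ}
\]
\[
  n_{\mathrm{SQL}} = \frac{100\,\mathrm{zJ/MAC}}{128\,\mathrm{zJ/photon}} \approx 0.8\ \text{photons per MAC}
\]
```

    This sub-photon budget is consistent with the demonstrations of optical energy below a photon per multiply-accumulate cited in the FICONN abstract above.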

    Netcast: Low-Power Edge Computing with WDM-defined Optical Neural Networks

    Full text link
    This paper analyzes the performance and energy efficiency of Netcast, a recently proposed optical neural-network architecture designed for edge computing. Netcast performs deep neural network inference by dividing the computational task into two steps, which are split between the server and (edge) client: (1) the server employs a wavelength-multiplexed modulator array to encode the network's weights onto an optical signal in an analog time-frequency basis, and (2) the client obtains the desired matrix-vector product through modulation and time-integrated detection. The simultaneous use of wavelength multiplexing, broadband modulation, and time-integrating detection allows large neural networks to be run at the client by effectively pushing the energy and memory requirements back to the server. The performance and energy efficiency are fundamentally limited by crosstalk and detector noise, respectively. We derive analytic expressions for these limits and perform numerical simulations to verify these bounds. Comment: 11 pages, 8 figures. Submitted to JSTQE OC2023 Special Issue (invited).
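    The client-side receiver can be understood as computing one output element per detector per integration window: the server streams a weight row over time on a wavelength channel, the client modulates it elementwise by its activations, and the slow integrating detector accumulates the sum. The Python model below is a schematic formulation under assumed nonnegative intensity encoding (signed values would need a differential scheme), not the paper's code.

```python
import numpy as np

# Schematic model of a Netcast-style time-integrating receiver.
# Server: streams weight row w_j(t) as optical power on one channel.
# Client: modulates the incoming power by activation x(t); a slow
# photodetector integrates over the frame, giving y_j = sum_k w_jk * x_k.

rng = np.random.default_rng(3)

W = rng.uniform(0, 1, (4, 8))      # weights as transmitted power levels
x = rng.uniform(0, 1, 8)           # client activations as modulator settings

def client_output(w_row, x, sigma=0.0):
    received = w_row * x                       # client modulator, elementwise
    charge = received.sum()                    # detector integrates over time
    return charge + sigma * rng.normal()       # optional detector-noise term

y = np.array([client_output(W[j], x) for j in range(4)])
print(np.allclose(y, W @ x))                   # True: integration = MAC
```

    Raising `sigma` models the detector-noise limit the paper quantifies, while crosstalk would appear as leakage between the wavelength channels carrying different weight rows.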

    IOI: In-network Optical Inference

    No full text